
    Distortions of Subjective Time Perception Within and Across Senses

    Background: The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings: We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual, and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perceived duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information, and visual events were never perceived as shorter than their actual durations. Conclusions/Significance: These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual, or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions.

    How Bodies and Voices Interact in Early Emotion Perception

    Successful social communication draws strongly on the correct interpretation of others' body and vocal expressions. Both can provide emotional information and often occur simultaneously. Yet their interplay has hardly been studied. Using electroencephalography, we investigated the temporal development underlying their neural interaction in auditory and visual perception. In particular, we tested whether this interaction qualifies as true integration following multisensory integration principles such as inverse effectiveness. Emotional vocalizations were embedded in either low or high levels of noise and presented with or without video clips of matching emotional body expressions. In both high and low noise conditions, a reduction in auditory N100 amplitude was observed for audiovisual stimuli. However, only under high noise did the N100 peak earlier in the audiovisual than in the auditory condition, suggesting facilitatory effects as predicted by the inverse effectiveness principle. Similarly, we observed earlier N100 peaks in response to emotional compared to neutral audiovisual stimuli. This was not the case in the unimodal auditory condition. Furthermore, suppression of beta-band oscillations (15–25 Hz), primarily reflecting biological motion perception, was modulated 200–400 ms after the vocalization. While larger differences in suppression between audiovisual and audio stimuli at high compared to low noise levels were found for emotional stimuli, no such difference was observed for neutral stimuli. This observation is in accordance with the inverse effectiveness principle and suggests a modulation of integration by emotional content. Overall, the results show that ecologically valid, complex stimuli such as combined body and vocal expressions are effectively integrated very early in processing.
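    The measures named above (N100 peak amplitude and latency, beta-band suppression in a post-stimulus window) can be made concrete with a small sketch. The snippet below is purely illustrative and is not the study's analysis pipeline: the sampling rate, time windows, and the synthetic waveform are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                                  # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1.0 / fs)          # epoch from -200 ms to +800 ms
erp = np.random.randn(t.size) * 0.5         # placeholder for an averaged ERP trace (uV)

# N100: most negative deflection in a typical 80-150 ms post-stimulus window
win = (t >= 0.08) & (t <= 0.15)
idx = np.argmin(erp[win])
n100_amplitude = erp[win][idx]
n100_latency_ms = t[win][idx] * 1000.0

# Beta-band (15-25 Hz) amplitude envelope via band-pass filtering + Hilbert transform
b, a = butter(4, [15.0, 25.0], btype="bandpass", fs=fs)
beta_env = np.abs(hilbert(filtfilt(b, a, erp)))

# Ratio of mean beta power 200-400 ms post-stimulus to the pre-stimulus baseline;
# values below 1 would indicate suppression
post = (t >= 0.2) & (t <= 0.4)
base = (t >= -0.2) & (t < 0.0)
beta_suppression = np.mean(beta_env[post] ** 2) / np.mean(beta_env[base] ** 2)
print(n100_amplitude, n100_latency_ms, beta_suppression)
```

    In practice, induced beta suppression would be computed on single trials before averaging; the sketch uses a single trace only to keep the example short.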

    Chronic CaMKII inhibition blunts the cardiac contractile response to exercise training

    Activation of the multifunctional Ca2+/calmodulin-dependent protein kinase II (CaMKII) plays a critical role in modulating cardiac function in both health and disease. Here, we determined the effect of chronic CaMKII inhibition during an exercise training program in healthy mice. CaMKII was inhibited by KN-93 injections. Mice were randomized to the following groups: sham sedentary, sham exercise, KN-93 sedentary, and KN-93 exercise. Cardiorespiratory and cardiac function were evaluated by ergospirometry during treadmill running, by echocardiography, and by cardiomyocyte fractional shortening and calcium handling. The results revealed that KN-93 alone had no effect on exercise capacity or fractional shortening. In sham animals, exercise training increased maximal oxygen uptake by 8% (p < 0.05), compared to a 22% (p < 0.05) increase after exercise in KN-93 treated mice (group difference p < 0.01). In contrast, in vivo fractional shortening evaluated by echocardiography improved after exercise in sham animals only: from 25 to 32% (p < 0.02). In inactive mice, KN-93 reduced rates of diastolic cardiomyocyte re-lengthening (by 25%, p < 0.05) as well as Ca2+ transient decay (by 16%, p < 0.05), whereas no such effect was observed after exercise training. KN-93 blunted the exercise training response in cardiomyocyte fractional shortening (63% sham vs. 18% KN-93; p < 0.01 and p < 0.05, respectively). These effects could not be explained solely by the Ca2+ transient amplitude, as KN-93 reduced it by 20% (p < 0.05) and the response to exercise training was equal (64% sham and 47% KN-93; both p < 0.01). We concluded that chronic CaMKII inhibition increased the time to 50% re-lengthening, which was recovered by exercise training, but paradoxically led to a greater increase in maximal oxygen uptake compared to sham mice. Thus, the effect of chronic CaMKII inhibition is multifaceted and of a complex nature.

    Judging Time-to-Passage of looming sounds: evidence for the use of distance-based information

    Perceptual judgments are an essential mechanism for our everyday interaction with other moving agents or events. For instance, estimation of the time remaining before an object contacts or passes us is essential to act upon or to avoid that object. Previous studies have demonstrated that participants use different cues to estimate the time to contact or the time to passage of approaching visual stimuli. Despite the considerable number of studies on the judgment of approaching auditory stimuli, not much is known about the cues that guide listeners' performance in an auditory Time-to-Passage (TTP) task. The present study evaluates how accurately participants judge approaching white-noise stimuli in a TTP task that included variable occlusion periods (the portion of the presentation time during which the stimulus is not audible). Results showed that participants were able to accurately estimate TTP, and their performance, in general, was only weakly affected by occlusion periods. Moreover, we examined the psychoacoustic variables provided by the stimuli and analysed how binaural cues related to the performance obtained in the psychophysical task. The binaural temporal difference seems to be the psychoacoustic cue guiding participants' performance for lower amounts of occlusion, while the binaural loudness difference seems to be the cue guiding performance for higher amounts of occlusion. These results allowed us to explain the perceptual strategies used by participants in a TTP task (maintaining accuracy by shifting the informative cue for TTP estimation), and to demonstrate that the psychoacoustic cue guiding listeners' performance changes according to the occlusion period. This study was supported by Bial Foundation Grant 143/14 (https://www.bial.com/en/bial_foundation.11/11th_symposium.219/fellows_preliminary_results.235/fellows_preliminary_results.a569.html); FCT PTDC/EEAELC/112137/2009 (https://www.fct.pt/apoios/projectos/consulta/vglobal_projecto?idProjecto=112137&idElemConcurso=3628); and COMPETE POCI-01-0145-FEDER-007043 and FCT – Fundação para a Ciência e Tecnologia within the Project Scope UID/CEC/00319/2013.
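    The two binaural cues discussed above can be illustrated with a short sketch. The following is a minimal, hypothetical example, not the study's analysis: the stereo signal is synthetic, and the window and lag limits are assumptions.

```python
import numpy as np

fs = 44100                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
left = rng.standard_normal(fs)               # 1 s of white noise at the left ear
right = 0.8 * np.roll(left, 20)              # right ear: delayed, attenuated copy

# Binaural temporal difference: lag at which the interaural cross-correlation peaks,
# searched within a physiologically plausible +/- 1 ms range
max_lag = int(0.001 * fs)
xcorr = np.correlate(left, right, mode="full")
center = len(xcorr) // 2
window = xcorr[center - max_lag:center + max_lag + 1]
itd_ms = 1000.0 * (np.argmax(window) - max_lag) / fs

def rms(x):
    """Root-mean-square level of a signal."""
    return np.sqrt(np.mean(x ** 2))

# Binaural loudness (level) difference, approximated as the RMS level ratio in dB
bld_db = 20.0 * np.log10(rms(left) / rms(right))
print(itd_ms, bld_db)
```

    With a real looming stimulus, both quantities would be tracked over time in short windows rather than computed once over the whole signal.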

    The Natural Statistics of Audiovisual Speech

    Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it has been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both the area of the mouth opening and the voice envelope are temporally modulated in the 2–7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.
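    Two of the measurements listed above, the correlation between mouth-opening area and the acoustic envelope and the 2–7 Hz temporal modulation, can be sketched in a few lines. The example below uses synthetic placeholder signals and assumed parameters; it is not the study's pipeline, and with real data the acoustic envelope would typically be derived from the Hilbert envelope or RMS of the audio waveform.

```python
import numpy as np
from scipy.signal import welch

fs = 100.0                                   # common sampling rate for both time series (assumed)
t = np.arange(0, 10, 1.0 / fs)               # 10 s of "speech"
rng = np.random.default_rng(0)

# Placeholder signals sharing a ~4 Hz modulation, plus independent noise
drive = np.sin(2 * np.pi * 4 * t)
mouth_area = drive + 0.5 * rng.standard_normal(t.size)   # mouth-opening area over time
audio_env = drive + 0.5 * rng.standard_normal(t.size)    # acoustic amplitude envelope

# Correlation between mouth opening and acoustic envelope
r = np.corrcoef(mouth_area, audio_env)[0, 1]

# Modulation spectrum of the envelope; fraction of power falling in the 2-7 Hz band
freqs, psd = welch(audio_env - audio_env.mean(), fs=fs, nperseg=256)
band = (freqs >= 2) & (freqs <= 7)
band_fraction = psd[band].sum() / psd.sum()
print(r, band_fraction)
```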

    Heterochrony and Cross-Species Intersensory Matching by Infant Vervet Monkeys

    Understanding the evolutionary origins of a phenotype requires understanding the relationship between ontogenetic and phylogenetic processes. Human infants have been shown to undergo a process of perceptual narrowing during their first year of life, whereby their intersensory ability to match the faces and voices of another species declines as they get older. We investigated the evolutionary origins of this behavioral phenotype by examining whether or not this developmental process occurs in non-human primates as well. We tested the ability of infant vervet monkeys (Cercopithecus aethiops), ranging in age from 23 to 65 weeks, to match the faces and voices of another non-human primate species (the rhesus monkey, Macaca mulatta). Even though the vervets had no prior exposure to rhesus monkey faces and vocalizations, our findings show that infant vervets can, in fact, recognize the correspondence between rhesus monkey faces and voices (but indicate that they do so by looking at the non-matching face for a greater proportion of overall looking time), and can do so well beyond the age of perceptual narrowing in human infants. Our results further suggest that the pattern of matching by vervet monkeys is influenced by the emotional saliency of the Face+Voice combination. That is, although they looked at the non-matching screen for Face+Voice combinations, they switched to looking at the matching screen when the Voice was replaced with a complex tone of equal duration. Furthermore, an analysis of pupillary responses revealed that their pupils showed greater dilation when looking at the matching natural face/voice combination versus the face/tone combination. Because the infant vervets in the current study exhibited cross-species intersensory matching far later in development than do human infants, our findings suggest either that intersensory perceptual narrowing does not occur in Old World monkeys or that it occurs later in development. We argue that these findings reflect the faster rate of neural development in monkeys relative to humans and the resulting differential interaction of this factor with the effects of early experience.

    First Data Release of the COSMOS Ly alpha Mapping and Tomography Observations: 3D Ly alpha Forest Tomography at 2.05 < z < 2.55

    Faint star-forming galaxies at z ~ 2–3 can be used as alternative background sources to probe the Lyα forest in addition to quasars, yielding high sightline densities that enable 3D tomographic reconstruction of the foreground absorption field. Here, we present the first data release from the COSMOS Lyα Mapping And Tomography Observations (CLAMATO) Survey, which was conducted with the LRIS spectrograph on the Keck I telescope. Over an observational footprint of 0.157 deg$^2$ within the COSMOS field, we used 240 galaxies and quasars at 2.17 < z < 3.00, with a mean comoving transverse separation of $2.37\,h^{-1}\,\mathrm{Mpc}$, as background sources probing the foreground Lyα forest absorption at 2.05 < z < 2.55. The Lyα forest data was then used to create a Wiener-filtered tomographic reconstruction over a comoving volume of $3.15\times 10^{5}\,h^{-3}\,\mathrm{Mpc}^{3}$ with an effective smoothing scale of $2.5\,h^{-1}\,\mathrm{Mpc}$. In addition to traditional figures, this map is also presented as a virtual-reality visualization and manipulable interactive figure. We see large overdensities and underdensities that visually agree with the distribution of coeval galaxies from spectroscopic redshift surveys in the same field, including overdensities associated with several recently discovered galaxy protoclusters in the volume. Quantitatively, the map signal-to-noise is $\mathrm{S/N}^{\mathrm{wiener}}\approx 3.4$ over a $3\,h^{-1}\,\mathrm{Mpc}$ top-hat kernel based on the variances estimated from the Wiener filter. This data release includes the redshift catalog, reduced spectra, extracted Lyα forest pixel data, and reconstructed tomographic map of the absorption. These can be downloaded from Zenodo (10.5281/zenodo.1292459).
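    As a toy illustration of the Wiener-filtering step mentioned above (not the CLAMATO pipeline; the covariance model, correlation length, noise level, and data are all placeholders), the sketch below reconstructs a smooth 1D field from noisy sightline pixels using the standard Wiener estimate $\hat{m} = C_{MD}\,(C_{DD} + N)^{-1}\,d$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder "sightline" data: pixel positions and noisy absorption fluctuations
x_data = np.sort(rng.uniform(0.0, 100.0, 40))        # comoving positions (h^-1 Mpc)
noise_var = 0.05
d = np.sin(x_data / 8.0) + rng.normal(0.0, np.sqrt(noise_var), x_data.size)

x_map = np.linspace(0.0, 100.0, 200)                 # grid for the reconstructed map
corr_len = 2.5                                       # assumed correlation length (h^-1 Mpc)
sig_var = 1.0                                        # assumed signal variance

def cov(a, b):
    """Gaussian signal covariance between two sets of positions."""
    return sig_var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / corr_len ** 2)

C_DD = cov(x_data, x_data) + noise_var * np.eye(x_data.size)   # data-data covariance + noise
C_MD = cov(x_map, x_data)                                       # map-data covariance

m = C_MD @ np.linalg.solve(C_DD, d)                  # Wiener-filtered field on the map grid
print(m[:5])
```

    A full 3D map at survey scale would normally be solved iteratively rather than with a dense matrix solve, but the estimator has the same form.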

    Complement component 3 (C3) expression in the hippocampus after excitotoxic injury: role of C/EBPβ

    [Background] The CCAAT/enhancer-binding protein β (C/EBPβ) is a transcription factor implicated in the control of proliferation, differentiation, and inflammatory processes, mainly in adipose tissue and liver, although more recent results have revealed an important role for this transcription factor in the brain. Previous studies from our laboratory indicated that CCAAT/enhancer-binding protein β is implicated in inflammatory processes and brain injury, since mice lacking this gene were less susceptible to kainic acid-induced injury. More recently, we have shown that the complement component 3 gene (C3) is a downstream target of CCAAT/enhancer-binding protein β and could be a mediator of the proinflammatory effects of this transcription factor in neural cells. [Methods] Adult male Wistar rats (8–12 weeks old) were used throughout the study. C/EBPβ+/+ and C/EBPβ–/– mice were generated from heterozygous breeding pairs. Animals were injected or not with kainic acid, brains were removed, and brain slices containing the hippocampus were analyzed for the expression of both CCAAT/enhancer-binding protein β and C3. [Results] In the present work, we have further extended these studies and show that CCAAT/enhancer-binding protein β and C3 are co-expressed in the CA1 and CA3 regions of the hippocampus after an excitotoxic injury. Studies using CCAAT/enhancer-binding protein β knockout mice demonstrate a marked reduction in C3 expression after kainic acid injection in these animals, suggesting that this protein is indeed regulated by C/EBPβ in the hippocampus in vivo. [Conclusions] Altogether, these results suggest that CCAAT/enhancer-binding protein β could regulate brain disorders in which excitotoxic and inflammatory processes are involved, at least in part through the direct regulation of C3. This work was supported by MINECO Grant SAF2014-52940-R and partially financed with FEDER funds. CIBERNED is funded by the Instituto de Salud Carlos III. JAM-G was supported by CIBERNED. We acknowledge support of the publication fee by the CSIC Open Access Publication Support Initiative through its Unit of Information Resources for Research (URICI).

    Dissociable Influences of Auditory Object vs. Spatial Attention on Visual System Oscillatory Activity

    Given that both auditory and visual systems have anatomically separate object identification (“what”) and spatial (“where”) pathways, it is of interest whether attention-driven cross-sensory modulations occur separately within these feature domains. Here, we investigated how auditory “what” vs. “where” attention tasks modulate activity in visual pathways using cortically constrained source estimates of magnetoencephalographic (MEG) oscillatory activity. In the absence of visual stimuli or tasks, subjects were presented with a sequence of auditory-stimulus pairs and instructed to selectively attend to phonetic (“what”) vs. spatial (“where”) aspects of these sounds, or to listen passively. To investigate sustained modulatory effects, oscillatory power was estimated from the time periods between sound-pair presentations. In comparison to attention to sound locations, phonetic auditory attention was associated with stronger alpha (7–13 Hz) power in several visual areas (primary visual cortex; lingual, fusiform, and inferior temporal gyri; lateral occipital cortex), as well as in higher-order visual/multisensory areas including lateral/medial parietal and retrosplenial cortices. Region-of-interest (ROI) analyses of dynamic changes, from which the sustained effects had been removed, suggested further power increases during Attend Phoneme vs. Location, centered in the alpha range, 400–600 ms after the onset of the second sound of each stimulus pair. These results suggest distinct modulations of visual system oscillatory activity during auditory attention to sound object identity (“what”) vs. sound location (“where”). The alpha modulations could be interpreted as reflecting enhanced crossmodal inhibition of feature-specific visual pathways and adjacent audiovisual association areas during “what” vs. “where” auditory attention.
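    For concreteness, the sketch below shows one simple way sustained alpha-band (7–13 Hz) power could be estimated from a single time-series segment between stimulus presentations. It is an illustrative example with a synthetic signal and assumed parameters, not the source-space MEG analysis used in the study.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                                  # sampling rate in Hz (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)            # 2 s segment between sound pairs
rng = np.random.default_rng(2)

# Synthetic signal: a 10 Hz rhythm embedded in broadband noise
signal = np.sin(2 * np.pi * 10.0 * t) + rng.standard_normal(t.size)

# Welch power spectral density, then mean PSD within the 7-13 Hz alpha band
freqs, psd = welch(signal, fs=fs, nperseg=1024)
alpha = (freqs >= 7) & (freqs <= 13)
alpha_power = psd[alpha].mean()
print(alpha_power)
```

    In a source-space analysis such estimates would be computed per cortical location and contrasted between attention conditions; only the band-power computation itself is shown here.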